Chapter 4 - Clustering and classification

About the data

We use the ‘Boston’ dataset from the MASS package. It contains information collected by the U.S. Census Service concerning housing in the area of Boston, Massachusetts; more information on the variables is available in the dataset’s documentation (?Boston).

#load the packages and the data, then take a first glimpse
library(MASS)      # the Boston data and lda()
library(dplyr)     # glimpse(), mutate() and pipes
library(tidyr)     # gather()
library(ggplot2)   # histograms and boxplots
library(corrplot)  # corrplot() and cor.mtest()
data("Boston")
glimpse(Boston)
## Observations: 506
## Variables: 14
## $ crim    <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, ...
## $ zn      <dbl> 18.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.5, 12.5, 12.5, 12.5,...
## $ indus   <dbl> 2.31, 7.07, 7.07, 2.18, 2.18, 2.18, 7.87, 7.87, 7.87, ...
## $ chas    <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
## $ nox     <dbl> 0.538, 0.469, 0.469, 0.458, 0.458, 0.458, 0.524, 0.524...
## $ rm      <dbl> 6.575, 6.421, 7.185, 6.998, 7.147, 6.430, 6.012, 6.172...
## $ age     <dbl> 65.2, 78.9, 61.1, 45.8, 54.2, 58.7, 66.6, 96.1, 100.0,...
## $ dis     <dbl> 4.0900, 4.9671, 4.9671, 6.0622, 6.0622, 6.0622, 5.5605...
## $ rad     <int> 1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, ...
## $ tax     <dbl> 296, 242, 242, 222, 222, 222, 311, 311, 311, 311, 311,...
## $ ptratio <dbl> 15.3, 17.8, 17.8, 18.7, 18.7, 18.7, 15.2, 15.2, 15.2, ...
## $ black   <dbl> 396.90, 396.90, 392.83, 394.63, 396.90, 394.12, 395.60...
## $ lstat   <dbl> 4.98, 9.14, 4.03, 2.94, 5.33, 5.21, 12.43, 19.15, 29.9...
## $ medv    <dbl> 24.0, 21.6, 34.7, 33.4, 36.2, 28.7, 22.9, 27.1, 16.5, ...

The data contains 506 rows (observations) and 14 columns, i.e. variables. All variables are numeric (type ‘dbl’) except ‘chas’ and ‘rad’, which are integers (‘int’) and could be treated as categorical.
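A quick way to confirm that those two behave like categorical variables (a small check added here) is to tabulate their values:

#tabulate the two integer variables: 'chas' is a 0/1 dummy, 'rad' takes a handful of index values
table(Boston$chas)
table(Boston$rad)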

#visualize the histograms of all variables
Boston %>% gather(key, value) %>%
  ggplot(aes(x = value)) + geom_histogram(bins = 30) +
  facet_wrap("key", scales = "free", shrink = TRUE)

#visualize the correlations. First, calculate the p-values for the correlations (cor.mtest passes 'method' and 'exact' on to cor.test)
p.mat <- cor.mtest(Boston, method = "spearman", exact = FALSE)

#then, calculate and plot the correlations; insignificant ones (p >= 0.05) are left blank
cor(Boston, method = "spearman") %>% corrplot(method = "circle", 
                         type = "upper", 
                         tl.cex = 1.3, cl.pos = "b",
                         p.mat = p.mat$p, sig.level = 0.05, insig = "blank",
                         diag = FALSE)


For many variables the distribution is rather skewed, or a single value is heavily over-represented. ‘chas’ is indeed a categorical variable with levels 0 and 1.

Concerning correlations, many variables appear to be correlated. For example, crime rate appears to be most strongly associated (inversely correlated) with the distance to employment centers (dis). Note that the circles represent Spearman’s correlation coefficients. ‘chas’ shows no notable correlations with the other variables, as it is a binary variable.
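To back up the claim about crime rate (an added check, not part of the original analysis), the Spearman correlations of ‘crim’ with the other variables can be ranked; ascending order puts the strongest inverse correlations first.

#rank the Spearman correlations of 'crim' against the other variables
crim_cor <- cor(Boston, method = "spearman")[, "crim"]
sort(crim_cor[names(crim_cor) != "crim"])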

Editing the data

Let’s edit the data so that we can do some clustering and classification.

#scale Boston and turn it into a data frame. Each variable then has mean 0, and values are expressed in standard deviations from the mean.
scaled_b <- Boston %>% scale %>% as.data.frame
summary(scaled_b)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865
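As a sanity check (an addition, not in the original analysis), scale() simply centers each column by its mean and divides it by its standard deviation:

#verify for one column: scale() is (x - mean(x)) / sd(x)
all.equal(scaled_b$crim, (Boston$crim - mean(Boston$crim)) / sd(Boston$crim))
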
#create a factor of crime rate, using the quartiles as breaks. With "mutate(crim = ...)" we overwrite the old values of 'crim', so no old variable needs to be dropped
bins <- quantile(scaled_b$crim)
scaled_b <- scaled_b %>% mutate(crim = cut(crim,
                               breaks = bins,
                               include.lowest = TRUE,
                               labels = c("low", "med_low", "med_high", "high")))
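Because the breaks are the quartiles, the four classes should be roughly equal in size; a quick check (added for illustration):

#the quartile breaks should yield four roughly equal classes
table(scaled_b$crim)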


# Divide the dataset into 'train' (80%) and 'test' (20%) sets
n <- nrow(scaled_b)
ind <- sample(n, size = 0.8*n)

train <- scaled_b[ind,]
test <- scaled_b[-ind,]
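
Note that the split is random, so the exact rows in ‘train’ and ‘test’ vary from run to run. Seeding the random number generator first (a suggested tweak, not part of the original run) would make the split reproducible:

# optional: seed the RNG so the train/test split is reproducible
set.seed(123)
ind <- sample(n, size = 0.8 * n)
train <- scaled_b[ind,]
test <- scaled_b[-ind,]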

Linear discriminant analysis

lda_boss <- lda(crim ~ ., data = train)

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}
#numeric classes of crime rates
classes <- as.numeric(train$crim)

# plot the lda results
plot(lda_boss, dimen = 2, col = classes, pch = classes)
lda.arrows(lda_boss, myscale = 2)

#compare the observed and predicted classes on the test set
observed <- test$crim
predicted <- predict(lda_boss, newdata = test)

crosstab <- table(correct = observed, predicted = predicted$class)
crosstab
##           predicted
## correct    low med_low med_high high
##   low       12       7        1    0
##   med_low    8      18        5    0
##   med_high   0      10       16    2
##   high       0       0        0   23


“High” crime rates are predicted correctly throughout. In the other classes slightly under 60% of predictions are correct, but most wrong predictions fall in classes adjacent to the correct one (e.g. correct class med_low, predicted class med_high).
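
The overall accuracy can be read straight off the cross-tabulation (an added computation): here (12 + 18 + 16 + 23) / 102 ≈ 0.68.

#overall accuracy: the share of predictions on the diagonal
sum(diag(crosstab)) / sum(crosstab)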

K-means

Let’s reload the data for clustering.

#reload & scale Boston
data('Boston') 
boston_scaled <- Boston %>% scale %>% as.data.frame 

#Euclidean distances between the scaled observations
dist_eu <- dist(boston_scaled)
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4620  4.8240  4.9110  6.1860 14.4000
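For comparison (an addition, not part of the original analysis), dist() supports other metrics through its method argument, e.g. the Manhattan distance:

#Manhattan (city-block) distances for comparison
dist_man <- dist(boston_scaled, method = "manhattan")
summary(dist_man)
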
#how many centroids should we have? Look for an elbow in the total within-cluster sum of squares (TWCSS)
set.seed(123)
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')


The optimal number of clusters could be two, because the steepest drop in the total within-cluster sum of squares ends there. Let’s nevertheless select three: the sum of squared distances still drops nicely after that point.

# run the K-means cluster analysis again with 3 centroids and plot the results
kmc <- kmeans(boston_scaled, centers = 3)
Boston$cluster <- as.factor(kmc$cluster)
pairs(Boston[5:13], col = Boston$cluster, pch = 19, cex = 0.2)

Boston %>% gather(key, value, -cluster) %>%
  ggplot(aes(x = key, y = value, group = cluster, col = cluster)) + geom_boxplot() +
  facet_wrap("key", scales = "free", shrink = TRUE)

The first figure shows the variable pairs as scatter plots. The biggest differences between the three clusters are in age, crime rate (crim), distance to employment centers (dis), proportion of lower-status residents (lstat), median home value (medv), nitrogen oxides concentration (nox) and the pupil-teacher ratio (ptratio), among others.
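
The cluster centroids support this: kmeans() stores them in the fitted object, here in standard-deviation units since the data were scaled.

#centroids of the three clusters, in SD units
round(kmc$centers, 2)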

BONUS

set.seed(123)
# Create another K-means clustering with 5 centroids, on a freshly reloaded and scaled copy of the data
data("Boston")
boston_scaled <- Boston %>% scale %>% as.data.frame
kmc_bonus <- kmeans(boston_scaled, centers = 5)
boston_scaled$cluster <- kmc_bonus$cluster
lda_bonus <- lda(cluster ~ ., data = boston_scaled)

# visualisation, using the same functions as in the previous sections. "col" and "pch" are set to the cluster labels
plot(lda_bonus, dimen = 2, col = boston_scaled$cluster, pch = boston_scaled$cluster)
lda.arrows(lda_bonus, myscale = 2.5)

A couple of variables stand out in these clusters: black and crim seem to contribute most as individual variables. It should be noted that the results varied considerably between runs before the set.seed(123) line was added.
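
This impression can be checked against the discriminant coefficients, which are also what the biplot arrows are drawn from (a quick inspection added here):

#variables with the largest absolute weight on the first discriminant
sort(abs(lda_bonus$scaling[, "LD1"]), decreasing = TRUE)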

SUPER BONUS

library(plotly)

#copy-pasting the code
model_predictors <- dplyr::select(train, -crim)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda_boss$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda_boss$scaling
matrix_product <- as.data.frame(matrix_product)

# k-means clustering for colors
kmc_superbonus <- kmeans(model_predictors, centers = 5)
model_predictors$cluster <- kmc_superbonus$cluster

# 3D scatter plots: first colored by crime class, then by k-means cluster
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = "scatter3d", mode = "markers", color = train$crim)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = "scatter3d", mode = "markers", color = model_predictors$cluster)
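
To see how the two colorings relate (an added check), the crime-rate classes can be cross-tabulated against the k-means clusters:

#how the five k-means clusters align with the four crime-rate classes
table(crim = train$crim, cluster = model_predictors$cluster)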